
    Young v. UPS and the Evidentiary Dilemma


    Transits and Lensing by Compact Objects in the Kepler Field: Disrupted Stars Orbiting Blue Stragglers

    Kepler's first major discoveries are two hot objects orbiting stars in its field. These may be the cores of stars that have each been eroded or disrupted by a companion star. The companion, which is the star monitored today, is likely to have gained mass from its now-defunct partner, and can be considered to be a blue straggler. KOI-81 is almost certainly the product of stable mass transfer; KOI-74 may be as well, or it may be the first clear example of a blue straggler created through three-body interactions. We show that mass transfer binaries are common enough that Kepler should discover ~1000 white dwarfs orbiting main sequence stars. Most, like KOI-74 and KOI-81, will be discovered through transits, but many will be discovered through a combination of gravitational lensing and transits, while lensing will dominate for a subset. In fact, some events caused by white dwarfs will have the appearance of "anti-transits" -- i.e., short-lived enhancements in the amount of light received from the monitored star. Lensing and other mass measurement methods provide a way to distinguish white dwarf binaries from planetary systems. This is important for the success of Kepler's primary mission, in light of the fact that white dwarf radii are similar to the radii of terrestrial planets, and that some white dwarfs will have orbital periods that place them in the habitable zones of their stellar companions. By identifying transiting and/or lensing white dwarfs, Kepler will conduct pioneering studies of white dwarfs and of the end states of mass transfer. It may also identify orbiting neutron stars or black holes. The calculations inspired by the discovery of KOI-74 and KOI-81 have implications for ground-based wide-field surveys as well as for future space-based surveys.
    Comment: 29 pages, 6 figures, 1 table; submitted to The Astrophysical Journal
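
    The "anti-transit" criterion can be made concrete with back-of-the-envelope numbers. The sketch below is a rough illustration, not the paper's calculation, and all binary parameters are invented for the example. It compares the self-lensing magnification excess, of order 2(R_E/R_*)^2 with Einstein radius R_E = sqrt(4GMa/c^2) for a binary of separation a, against the occultation depth (R_WD/R_*)^2; a positive net value means the white dwarf's passage briefly brightens the star instead of dimming it.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]
M_SUN = 1.989e30     # solar mass [kg]
R_SUN = 6.957e8      # solar radius [m]
AU = 1.496e11        # astronomical unit [m]

def net_flux_change(m_wd_msun, r_wd_rsun, a_au, r_star_rsun):
    """Approximate mid-eclipse fractional flux change when a white dwarf
    crosses the disk of its companion: lensing adds roughly +2*(R_E/R_*)^2
    while the occulting disk removes (R_wd/R_*)^2."""
    r_einstein = math.sqrt(4 * G * m_wd_msun * M_SUN * a_au * AU) / c
    r_star = r_star_rsun * R_SUN
    r_wd = r_wd_rsun * R_SUN
    return (2 * r_einstein**2 - r_wd**2) / r_star**2

# Invented example: a 0.6 M_sun, Earth-sized white dwarf at 0.45 AU from a
# Sun-like star. A positive result means a short brightening ("anti-transit").
print(net_flux_change(m_wd_msun=0.6, r_wd_rsun=0.009, a_au=0.45, r_star_rsun=1.0))
```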

    essHi-C: Essential component analysis of Hi-C matrices

    Motivation: Hi-C matrices are cornerstones for qualitative and quantitative studies of genome folding, from its territorial organization to compartments and topological domains. The high dynamic range of genomic distances probed in Hi-C assays is reflected in an inherent stochastic background of the interaction matrices, which inevitably convolves the features of interest with largely aspecific ones. Results: Here we introduce and discuss essHi-C, a method to isolate the specific, or essential, component of Hi-C matrices from the aspecific portion of the spectrum that is compatible with random matrices. Systematic comparisons show that essHi-C improves the clarity of the interaction patterns, enhances the robustness against sequencing depth, allows the unsupervised clustering of experiments in different cell lines and recovers the cell-cycle phasing of single cells based on Hi-C data. Thus, essHi-C provides a means for isolating significant biological and physical features from Hi-C matrices.
    Comment: 14 pages, 4 figures. This is the Authors' Original Version of the article, which has been accepted for publication in Bioinformatics, published by Oxford University Press
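
    As a rough illustration of the spectral idea (a minimal sketch of the general approach, not the authors' published algorithm; the diagonal-shuffling null model and the threshold rule below are assumptions made for the example), one can keep only the eigenmodes of a Hi-C matrix whose eigenvalues exceed those of randomized surrogates and rebuild the matrix from them:

```python
import numpy as np

def essential_component(hic, n_random=20, seed=0):
    """Keep only the eigenmodes of a (symmetric) Hi-C matrix whose
    eigenvalues exceed those found in diagonal-shuffled surrogates,
    then rebuild the matrix from these 'essential' modes."""
    rng = np.random.default_rng(seed)
    m = (hic + hic.T) / 2.0                      # enforce symmetry
    evals, evecs = np.linalg.eigh(m)

    # Null spectrum: shuffle contacts independently within each diagonal,
    # which preserves the average decay of contacts with genomic distance.
    n = m.shape[0]
    null_max = []
    for _ in range(n_random):
        r = np.zeros_like(m, dtype=float)
        for d in range(n):
            diag = np.diagonal(m, d).copy()
            rng.shuffle(diag)
            r += np.diag(diag, d)
        r = r + np.triu(r, 1).T                  # make the surrogate symmetric
        null_max.append(np.abs(np.linalg.eigvalsh(r)).max())
    threshold = np.mean(null_max)

    keep = np.abs(evals) > threshold             # the essential eigenspace
    return (evecs[:, keep] * evals[keep]) @ evecs[:, keep].T
```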

    MarciaTesta: An Automatic Generator of Test Programs for Microprocessors' Data Caches

    SBST (Software Based Self-Testing) is an effective solution for in-system testing of SoCs without any additional hardware requirement. SBST is particularly suited for embedded blocks with limited accessibility, such as cache memories. Several methodologies have been proposed to properly adapt existing March algorithms to test cache memories. Unfortunately, they all leave test engineers with the task of manually coding them into the specific Instruction Set Architecture (ISA) of the target microprocessor. We propose an EDA tool for the automatic generation of assembly cache test programs for a specific architecture.
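
    A toy version of such a generator (a minimal sketch, not MarciaTesta itself; the March-test encoding and the pseudo-assembly mnemonics are illustrative assumptions) might expand each March element into a loop over cache sets in the prescribed address order:

```python
# MATS+ in the usual March notation: {⇕(w0); ⇑(r0,w1); ⇓(r1,w0)}.
# The first element's addressing order is irrelevant, so "up" is used here.
MATS_PLUS = [
    ("up",   ["w0"]),
    ("up",   ["r0", "w1"]),
    ("down", ["r1", "w0"]),
]

def generate(march, n_sets, line_size):
    """Emit a pseudo-assembly program that touches one word per cache line,
    visiting every cache set in the order each March element prescribes."""
    asm = []
    for order, ops in march:
        sets = range(n_sets) if order == "up" else range(n_sets - 1, -1, -1)
        for s in sets:
            addr = s * line_size                 # one address per cache set
            for op in ops:
                value = op[1:]
                if op.startswith("w"):           # write the data background
                    asm.append(f"    li   r1, {value}")
                    asm.append(f"    sw   r1, {addr}(r0)")
                else:                            # read back and check the value
                    asm.append(f"    lw   r2, {addr}(r0)")
                    asm.append(f"    li   r3, {value}")
                    asm.append(f"    bne  r2, r3, fail")
    asm += ["    j    done", "fail:", "    j    fail", "done:"]
    return "\n".join(asm)

print(generate(MATS_PLUS, n_sets=4, line_size=32))
```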

    Validation & Verification of an EDA automated synthesis tool

    Reliability and correctness are two mandatory features for automated synthesis tools. To reach these goals, several campaigns of Validation and Verification (V&V) are needed. The paper presents the extensive efforts set up to prove the correctness of a newly developed EDA automated synthesis tool. The target tool, MarciaTesta, is a multi-platform automatic generator of test programs for microprocessors' caches. Taking as input the selected March Test and some architectural details about the target cache memory, the tool automatically generates the assembly-level program to be run as Software Based Self-Testing (SBST). The equivalence between the original March Test, the automatically generated assembly program, and the intermediate C/C++ program has been proved by resorting to sophisticated logging mechanisms. A set of proven libraries has been generated and extensively used during the tool development. A detailed analysis of the lessons learned is reported.
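
    The trace-comparison step can be illustrated with a much simpler harness than the one described in the paper (everything below, including the log format, is an assumed example): each abstraction level emits a log of memory operations, the logs are parsed into canonical (operation, address, value) triples, and the two traces are compared step by step.

```python
def parse_log(lines):
    """Each log line is assumed to look like 'W 0x0040 1' or 'R 0x0040 0'."""
    trace = []
    for line in lines:
        op, addr, value = line.split()
        trace.append((op.upper(), int(addr, 16), int(value)))
    return trace

def equivalent(log_a, log_b):
    """Check that two abstraction levels produced the same memory-access trace."""
    ta, tb = parse_log(log_a), parse_log(log_b)
    if len(ta) != len(tb):
        return False, f"length mismatch: {len(ta)} vs {len(tb)}"
    for i, (a, b) in enumerate(zip(ta, tb)):
        if a != b:
            return False, f"first divergence at step {i}: {a} vs {b}"
    return True, "traces match"

# Example with two hypothetical, hand-written logs.
c_level   = ["W 0x0000 0", "R 0x0000 0", "W 0x0000 1"]
asm_level = ["W 0x0000 0", "R 0x0000 0", "W 0x0000 1"]
print(equivalent(c_level, asm_level))
```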

    Location-Verification and Network Planning via Machine Learning Approaches

    In-region location verification (IRLV) in wireless networks is the problem of deciding if user equipment (UE) is transmitting from inside or outside a specific physical region (e.g., a safe room). The decision process exploits the features of the channel between the UE and a set of network access points (APs). We propose a solution based on machine learning (ML) implemented by a neural network (NN) trained with the channel features (in particular, noisy attenuation values) collected by the APs for various positions both inside and outside the specific region. The output is a decision on the UE position (inside or outside the region). By seeing IRLV as a hypothesis testing problem, we address the optimal positioning of the APs for minimizing either the area under the curve (AUC) of the receiver operating characteristic (ROC) or the cross entropy (CE) between the NN output and the ground truth (available during training). In order to solve the minimization problem we propose a two-stage particle swarm optimization (PSO) algorithm. We show that for long training and a NN with enough neurons the proposed solution achieves the performance of the Neyman-Pearson (N-P) lemma.
    Comment: Accepted for Workshop on Machine Learning for Communications, June 07 2019, Avignon, France
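
    A minimal sketch of the classification step (assuming synthetic attenuation data and an off-the-shelf scikit-learn network; this is not the paper's architecture, AP deployment, or dataset) looks as follows: per-AP attenuation vectors labelled inside/outside train a small NN, which is then scored with the ROC AUC on held-out measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_aps, n_samples = 4, 2000

# Synthetic data: positions inside the region are assumed to show lower mean
# attenuation towards the APs; shadowing is modelled as Gaussian noise in dB.
labels = rng.integers(0, 2, n_samples)                          # 1 = inside region
mean_att = np.where(labels[:, None] == 1, 70.0, 90.0)           # dB, illustrative
features = mean_att + rng.normal(0.0, 8.0, (n_samples, n_aps))  # shadowing

# Small fully connected NN deciding inside/outside from per-AP attenuations.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(features[:1500], labels[:1500])

scores = net.predict_proba(features[1500:])[:, 1]
print("test ROC AUC:", roc_auc_score(labels[1500:], scores))
```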

    Modelling sustainable human development in a capability perspective

    In this paper we model sustainable human development, as intended in Sen's capability approach, in a system dynamics framework. Our purpose is to verify the variations over time of some achieved functionings, due to structural dynamics and to variations of the institutional setting and instrumental freedoms (IF Vortex). The model is composed of two sections. The 'Left Side' one points out the 'demand' for functionings in an ideal-world situation. The real-world one, on the 'Right Side', indicates the 'supply' of functionings that the socio-economic system is able to provide individuals with. The general model, specifically tailored for Italy, can be simulated over desired time horizons: for each time period, we carry out a comparison between ideal-world and real-world functionings. On the basis of their distances, the model simulates some responses of decision makers. These responses, in turn influenced by institutions and instrumental freedoms, ultimately affect the dynamics of real-world functionings, i.e. of sustainable human development.
    Keywords: Functionings, Capabilities, Institutions, Instrumental Freedoms, Sustainable Human Development
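
    The gap-closing feedback at the core of the model can be caricatured in a few lines (a deliberately simplified sketch; the variables, numbers, and adjustment rule are illustrative assumptions, not the calibrated model for Italy): the real-world supply of a functioning adjusts each period towards its ideal-world demand at a speed moderated by the quality of institutions and instrumental freedoms.

```python
def simulate(ideal, supply0, freedoms, years=30, responsiveness=0.3):
    """Each period decision makers close part of the gap between the
    ideal-world demand for a functioning and its real-world supply; the
    effective adjustment speed is scaled by the quality of institutions
    and instrumental freedoms (0 = fully blocking, 1 = fully enabling)."""
    supply = supply0
    path = [supply]
    for _ in range(years):
        gap = ideal - supply                 # ideal-world vs real-world distance
        supply += responsiveness * freedoms * gap
        path.append(supply)
    return path

# Two illustrative runs: weak vs. strong instrumental freedoms.
print(simulate(ideal=1.0, supply0=0.4, freedoms=0.2)[-1])
print(simulate(ideal=1.0, supply0=0.4, freedoms=0.8)[-1])
```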

    An area-efficient 2-D convolution implementation on FPGA for space applications

    The 2-D convolution is an algorithm widely used in image and video processing. Although its computation is simple, its implementation requires high computational power and intensive use of memory. Field Programmable Gate Array (FPGA) architectures have been proposed to accelerate the calculation of the 2-D convolution, with buffers implemented on the FPGA used to avoid direct memory accesses. In this paper we present an implementation of the 2-D convolution algorithm on an FPGA architecture designed to support this operation in space applications. The proposed solution dramatically decreases the area needed while keeping good performance, making it appropriate for embedded systems in critical space applications.
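
    The buffering scheme alluded to above can be mimicked in software (a functional sketch of the generic line-buffer technique, not the paper's RTL design; kernel size, image size, and variable names are illustrative): pixels stream in one per cycle, only K-1 rows are stored, and a K x K register window slides along each row.

```python
import numpy as np

def stream_convolve(image, kernel):
    """Functional model of a streaming 2-D convolution: one pixel enters per
    'cycle', only K-1 image rows are kept in line buffers, and a K x K window
    of registers slides along the current row."""
    h, w = image.shape
    k = kernel.shape[0]                          # square K x K kernel assumed
    line_buf = np.zeros((k - 1, w))              # the only row storage needed
    window = np.zeros((k, k))                    # shift-register window
    out = np.zeros((h - k + 1, w - k + 1))

    for y in range(h):
        for x in range(w):                       # one incoming pixel per cycle
            column = np.append(line_buf[:, x], image[y, x])  # newest K samples of column x
            line_buf[:, x] = column[1:]          # rotate the line buffers
            window[:, :-1] = window[:, 1:]       # shift the window left
            window[:, -1] = column
            if y >= k - 1 and x >= k - 1:        # window holds a full K x K patch
                out[y - k + 1, x - k + 1] = np.sum(window * kernel)
    return out

# Check against a direct sliding-window computation on a small test image.
img = np.arange(36, dtype=float).reshape(6, 6)
ker = np.ones((3, 3)) / 9.0
ref = np.array([[np.sum(img[i:i + 3, j:j + 3] * ker) for j in range(4)]
                for i in range(4)])
print(np.allclose(stream_convolve(img, ker), ref))
```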